Robot Cars and the "Trolley Problem" - A Rant

Kinja'd!!! "mtdrift" (mtdrift)
10/26/2018 at 02:57 • Filed to: Rants, AI, Autonomous Vehicles, Self-driving Cars, Front Page

Sparky, you dumb, dumb, fuck.

The trolley problem is one of the most flawed ethical thought experiments imaginable, and I’m sick and tired of seeing it trotted out as some kind of gold standard for understanding morality and moral decision-making, human or otherwise.

(I originally posted this as a comment on another post, but it got really long, so I decided to make it an Oppo post.)

It’s mind-boggling to me that a study of this magnitude, with a data set this impressive (funded at the cost of, well, a lot, I assume), went forward with an underlying experiment that is fundamentally bullshit.

The original post has a more complete discussion about why the TP isn’t even worthy of your ass (see what I did there?). But, in short, it forces the subjects of the experiment into a purely utilitarian ethical universe where the actual range of human decision-making is arbitrarily delimited in Manichean ways that don’t make any sense, given the actual complexity of the real world (as so many other commenters on the post pointed out), and how actual humans behave in that world.

A thought experiment about morality that has minimal overlap with the real moral universe isn’t much of an experiment, especially when the researchers are suggesting that the conclusions should help direct how we program our AI. And yet, here we are! Its either/or answer set makes the Trolley Problem fantastic for large-scale, statistically driven quantitative social science like this, especially science that flattens cultural differences, but the results and conclusions derived from any bad experiment are, by their nature, going to be bad.

We prefer to run over fat people and cats instead of doctors and babies? Robot cars in China should favor saving old people over kids? These are the kinds of in-depth insights we’re supposed to glean from all this? This is the kind of survey data that’s supposed to inform AI about how to behave in the world on our behalf?

Give me a fucking break.

The whole point of AI-controlled autonomous vehicles is that they should be good enough to avoid the very situations posed in the Trolley Problem in the first place . And, I think that AI should be sophisticated enough to perceive the complexities of the world it operates in beyond arbitrary, utilitarian-universe ethical structures - just like humans do. Programming cats into the bottom of a rigid decision-making ethical hierarchy and babies at the top is the dumbest kind of AI I can imagine. I’m not getting in that car, no thank you.
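
To make the absurdity concrete, here’s roughly what that rigid hierarchy would look like as code. This is a deliberately dumb, hypothetical Python sketch of a survey-derived ranking, not anything a real AV stack actually does; every name in it is made up:

```python
# A deliberately dumb, hypothetical sketch of the "rigid ethical
# hierarchy" the survey results seem to imply. No real autonomous
# vehicle works this way; all names here are invented.

# Lower index = more worth saving, per the aggregate survey preferences.
SURVEY_HIERARCHY = ["baby", "doctor", "adult", "elderly person", "fat person", "cat"]

def choose_victim(candidates):
    """Pick whichever candidate ranks lowest in the fixed hierarchy.

    This is the Trolley Problem's either/or logic: the whole moral
    universe reduced to a table lookup, with no speed, distance,
    braking, or escape routes anywhere in sight.
    """
    return max(candidates, key=SURVEY_HIERARCHY.index)

print(choose_victim(["baby", "cat"]))  # -> 'cat', every time, no context
```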

Of course, engineers working on this kind of AI wouldn’t dream of doing it that way, and I think even now its development is well ahead of that curve. So it all raises the question: what’s the point of a massive study like this, other than reporting on how captive subjects responded to questions about situations in a moral universe that doesn’t even really exist?

Don’t even get me started on whether or not we should consider robots ethical actors at all.


DISCUSSION (12)


Berang > mtdrift
10/26/2018 at 03:15
TLDR.

My 2 cents: cars don’t run on tracks, so there will invariably be a near-infinite number of possible trajectories for the car to take should it need to avoid something, instead of just two. This is, after all, the main reason for having trackless vehicles.
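
A minimal sketch of what that means in practice, assuming an invented toy risk function and made-up names: the planner evaluates a whole fan of steering/braking options rather than switching between two tracks.

```python
import math

# Hypothetical sketch: a trackless vehicle samples a fan of
# steering/braking options, not a binary track switch. The names
# and the toy risk function are invented for illustration.

def candidate_maneuvers(n=200):
    """Yield (steering, brake) combinations to evaluate."""
    for i in range(n):
        steering = -0.5 + i / (n - 1)      # radians, swept left to right
        for brake in (0.0, 0.5, 1.0):      # fraction of max braking
            yield steering, brake

def predicted_risk(steering, brake):
    """Toy stand-in for a real predicted-collision model."""
    return (1.0 - brake) * abs(math.sin(steering))

# Choose the least risky of ~600 options, not one of two.
best = min(candidate_maneuvers(), key=lambda m: predicted_risk(*m))
print(f"steering={best[0]:+.2f} rad, brake={best[1]:.0%}")
```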


mtdrift > Berang
10/26/2018 at 03:26
Exactly - these are the kinds of complexities that the real world presents to us and to self-driving cars. It’s NEVER an either/or outcome. So why base a morality study exclusively on either/or decision making? It doesn’t make any sense.


facw > mtdrift
10/26/2018 at 03:29
The thing that gets me is that, even if we set aside the whole synthetic nature of the problem, this decision would not be anything new. This is something drivers do every day. Except that human drivers probably never even get to the point of making any sort of rational decision, because it happens too suddenly for us, and most people are going to just make some panic move. As you note, the real promise of AI drivers is that they should be able to see things sooner and react quicker, with superior situational awareness, allowing them to avoid having to endanger people either outside or inside the vehicle (the best way to solve the Trolley Problem is to find a way to stop the trolley).

We won’t be able to do that in every case, but we should be able to decrease incidents enough that we’re better off under any reasonable approach. And if we are so concerned about what AI cars do in the situations we can’t avoid, maybe we need to add an ethics requirement for human drivers as well, so that we can drill what we want drivers to do into their heads.
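
It’s easy to put rough numbers on “see things sooner, react quicker.” A back-of-the-envelope sketch, assuming an illustrative 1.5 s human reaction time versus 0.1 s for a sensor stack and a 7 m/s² braking rate (all assumed figures, not measurements):

```python
# Back-of-the-envelope stopping distances. The reaction times
# (1.5 s human, 0.1 s machine) and the 7 m/s^2 braking rate are
# illustrative assumptions, not measured values.

def stopping_distance(speed, reaction, decel=7.0):
    """Reaction-time travel plus braking distance (v^2 / 2a), in meters."""
    return speed * reaction + speed ** 2 / (2 * decel)

v = 50 * 1000 / 3600                       # 50 km/h in m/s
human = stopping_distance(v, reaction=1.5)
robot = stopping_distance(v, reaction=0.1)
print(f"human: {human:.1f} m  robot: {robot:.1f} m  margin: {human - robot:.1f} m")
# ~19 m of extra margin at 50 km/h: often the difference between
# a trolley problem and a non-event.
```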


AestheticsInMotion > mtdrift
10/26/2018 at 03:37
Clicks. Viewer engagement. Advertising dollars. 


aquila121 > mtdrift
10/26/2018 at 04:47
Yeah, but I love the trolley problem because it's part of a fantastic episode of "The Good Place."


wafflesnfalafel > AestheticsInMotion
10/26/2018 at 04:52
exactly...


SilentButNotReallyDeadly...killed by G/O Media > mtdrift
10/26/2018 at 06:51
My rule is...never swerve. The world is cruel. And my reaction times are not what they used to be.

So what was the problem that AI driving is supposed to solve? Efficient movement of humans? Or the safe movement of humans? You can’t have both... probably because of the humans.


My X-type is too a real Jaguar > aquila121
10/26/2018 at 07:44
In the last simulation, Chidi would kill 14 William Shakespeares to save one Santa Claus.


mtdrift > AestheticsInMotion
10/26/2018 at 08:06
Wait, which one? Jason’s post? The Nature article? The study itself? Me?


nermal > mtdrift
10/26/2018 at 08:21
I think the answer to this is to allow the owners of autonomous cars to program the order in which they would like to run people and animals over.

One person may choose to save the ducks, but hit the old person. Another would hit a deer instead of a bicyclist.


Clown Shoe Pilot > aquila121
10/26/2018 at 09:29
For those that haven’t been watching...


Urambo Tauro > mtdrift
10/26/2018 at 09:45
The whole point of AI-controlled autonomous vehicles is that they should be good enough to avoid the very situations posed in the Trolley Problem in the first place .

^ THIS.

Also, the HAV version of the trolley problem is often presented in a way that chooses between harming bystanders versus harming occupants. Mark my words, NO manufacturer is going to offer a car that would sacrifice the occupants in the interest of public safety. It’s just not a very good selling point.